Discriminative Training of Deep Fully-connected Continuous CRF with Task-specific Loss
Recent works on deep conditional random fields (CRF) have set new records on
many vision tasks involving structured predictions. Here we propose a
fully-connected deep continuous CRF model for both discrete and continuous
labelling problems. We exemplify the usefulness of the proposed model on
multi-class semantic labelling (discrete) and robust depth estimation
(continuous) problems.
In our framework, we model both the unary and the pairwise potential
functions as deep convolutional neural networks (CNN), which are jointly
learned in an end-to-end fashion. The proposed method retains the main
advantage of continuously-valued CRFs: a closed-form solution for maximum a
posteriori (MAP) inference.
To better adapt to different tasks, instead of using the commonly employed
maximum likelihood CRF parameter learning protocol, we propose task-specific
loss functions for learning the CRF parameters.
This enables direct optimization of the quality of the MAP estimates during
the course of learning.
Specifically, we optimize the multi-class classification loss for the
semantic labelling task and Tukey's biweight loss for the robust depth
estimation problem.
Experimental results on the semantic labelling and robust depth estimation
tasks demonstrate that the proposed method compares favorably against both
baseline and state-of-the-art methods.
In particular, we show that although the proposed deep CRF model is
continuously valued, when equipped with a task-specific loss it achieves
impressive results even on discrete labelling tasks.
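For concreteness, the Tukey's biweight loss mentioned above can be sketched as
follows; the tuning constant c is our assumption (c = 4.685 is a common
default in robust statistics, not a value stated in the abstract):

```python
def tukey_biweight(r, c=4.685):
    """Tukey's biweight (bisquare) loss for a scalar residual r.

    Behaves like a quadratic near zero but saturates at c**2 / 6 once
    |r| >= c, so outlier residuals contribute only a bounded penalty.
    The cutoff c = 4.685 is a conventional default, not from the paper.
    """
    if abs(r) >= c:
        return c * c / 6.0
    u = 1.0 - (r / c) ** 2
    return (c * c / 6.0) * (1.0 - u ** 3)
```

The bounded tail is what makes the depth estimation robust: a grossly wrong
depth prediction cannot dominate the training objective the way it would
under a squared loss.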
Deep Convolutional Neural Fields for Depth Estimation from a Single Image
We consider the problem of depth estimation from a single monocular image in
this work. It is a challenging task as no reliable depth cues are available,
e.g., stereo correspondences or motion. Previous efforts have focused on
exploiting geometric priors or additional sources of information, all of them
using hand-crafted features. Recently, there is mounting evidence that features
from deep convolutional neural networks (CNN) are setting new records for
various vision applications. On the other hand, considering the continuous
characteristic of the depth values, depth estimations can be naturally
formulated into a continuous conditional random field (CRF) learning problem.
Therefore, in this paper we present a deep convolutional neural field model for
estimating depths from a single image, aiming to jointly explore the capacity
of deep CNN and continuous CRF. Specifically, we propose a deep structured
learning scheme which learns the unary and pairwise potentials of continuous
CRF in a unified deep CNN framework.
The proposed method can be used for depth estimation of general scenes with
no geometric priors or any extra information injected. In our case, the
integral of the partition function can be analytically calculated, thus we can
exactly solve the log-likelihood optimization. Moreover, solving the MAP
problem for predicting depths of a new image is highly efficient as closed-form
solutions exist. We experimentally demonstrate that the proposed method
outperforms state-of-the-art depth estimation methods on both indoor and
outdoor scene datasets.
Comment: appearing in the CVPR 2015 proceedings.
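The closed-form MAP property can be illustrated with a toy chain-structured
Gaussian CRF; the quadratic unary/pairwise form and the weight lam below are
illustrative assumptions, not the paper's learned potentials:

```python
import numpy as np

def crf_map_depth(z, lam=1.0):
    """MAP estimate for a toy chain-structured Gaussian CRF.

    Energy: E(y) = sum_i (y_i - z_i)^2 + lam * sum_i (y_i - y_{i+1})^2,
    where z holds unary (e.g. CNN-regressed) depths.  E is quadratic in y,
    so setting the gradient to zero gives the linear system
    (I + lam * L) y = z, with L the graph Laplacian of the chain --
    no iterative inference is needed.
    """
    n = len(z)
    L = np.zeros((n, n))
    for i in range(n - 1):            # chain edges (i, i+1)
        L[i, i] += 1.0
        L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0
        L[i + 1, i] -= 1.0
    return np.linalg.solve(np.eye(n) + lam * L, np.asarray(z, float))
```

Running it on a noisy unary prediction such as [0, 6, 0] pulls the spike
toward its neighbours while keeping the overall mass, which is the smoothing
behaviour the pairwise term is meant to encode.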
CRF Learning with CNN Features for Image Segmentation
Conditional Random Fields (CRFs) have been widely applied to image
segmentation. While most studies rely on hand-crafted features, we here
propose to exploit a pre-trained large convolutional neural network (CNN) to
generate deep features for CRF learning. The deep CNN is trained on the
ImageNet dataset and transferred to the segmentation task here for constructing
potentials of superpixels. Then the CRF parameters are learnt using a
structured support vector machine (SSVM). To fully exploit context information
in inference, we construct spatially related co-occurrence pairwise potentials
and incorporate them into the energy function. This prefers labelling of object
pairs that frequently co-occur in a certain spatial layout and at the same time
avoids implausible labellings during the inference. Extensive experiments on
binary and multi-class segmentation benchmarks demonstrate the promise of the
proposed method. We thus provide new baselines for the segmentation performance
on the Weizmann horse, Graz-02, MSRC-21, Stanford Background and PASCAL VOC
2011 datasets.
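A minimal sketch of an energy with co-occurrence pairwise potentials over
superpixel labels; the label names, costs, and co-occurrence scores below are
toy assumptions, not the paper's learned values:

```python
def crf_energy(labels, unary, edges, cooc, w=1.0):
    """Energy of a labelling for a toy superpixel CRF.

    labels      : label assigned to each superpixel
    unary[i][l] : cost of giving superpixel i label l (e.g. from CNN features)
    edges       : list of neighbouring superpixel pairs (i, j)
    cooc[(a,b)] : penalty for labels a, b appearing adjacently; pairs that
                  frequently co-occur in a plausible spatial layout get a
                  low penalty, implausible pairs a high one.
    """
    e = sum(unary[i][l] for i, l in enumerate(labels))
    for i, j in edges:
        a, b = sorted((labels[i], labels[j]))
        e += w * cooc.get((a, b), 1.0)   # unseen pairs: high default penalty
    return e
```

Under such an energy, a labelling like (cow, grass) scores lower than
(boat, grass) when the former pair co-occurs often in training, which is how
the co-occurrence term suppresses implausible labellings at inference time.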
Structured Learning of Tree Potentials in CRF for Image Segmentation
We propose a new approach to image segmentation, which exploits the
advantages of both conditional random fields (CRFs) and decision trees. In the
literature, the potential functions of CRFs are mostly defined as a linear
combination of some pre-defined parametric models, and then methods like
structured support vector machines (SSVMs) are applied to learn those linear
coefficients. We instead formulate the unary and pairwise potentials as
nonparametric forests---ensembles of decision trees, and learn the ensemble
parameters and the trees in a unified optimization problem within the
large-margin framework. In this fashion, we easily achieve nonlinear learning
of potential functions on both unary and pairwise terms in CRFs. Moreover, we
learn class-wise decision trees for each object that appears in the image. Due
to the rich structure and flexibility of decision trees, our approach is
powerful in modelling complex data likelihoods and label relationships. The
resulting optimization problem is very challenging because it can have
exponentially many variables and constraints. We show that this challenging
optimization can be efficiently solved by combining a modified column
generation and cutting-planes techniques. Experimental results on both binary
(Graz-02, Weizmann horse, Oxford flower) and multi-class (MSRC-21, PASCAL VOC
2012) segmentation datasets demonstrate the power of the learned nonlinear
nonparametric potentials.
Comment: 10 pages; appearing in IEEE Transactions on Neural Networks and
Learning Systems.
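The idea of nonparametric forest potentials can be sketched as a weighted
ensemble of trees; the stump "trees" below are hypothetical stand-ins for the
trees the paper learns jointly with the weights:

```python
def forest_potential(x, trees, weights):
    """Nonparametric potential: a weighted ensemble of decision trees.

    Each tree is a callable mapping a feature vector x to a score.  In the
    paper both the weights and the trees themselves are fit inside one
    large-margin optimization, yielding a nonlinear potential instead of a
    linear combination of fixed parametric models.
    """
    return sum(w * t(x) for t, w in zip(trees, weights))

# Toy decision stumps (hypothetical, for illustration only)
stump1 = lambda x: 1.0 if x[0] > 0.5 else -1.0
stump2 = lambda x: 1.0 if x[1] > 0.5 else -1.0
```

Because each tree partitions the feature space nonlinearly, even this tiny
two-stump ensemble already expresses potentials no linear model over the raw
features could.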
An Efficient Dual Approach to Distance Metric Learning
Distance metric learning is of fundamental interest in machine learning
because the distance metric employed can significantly affect the performance
of many learning methods. Quadratic Mahalanobis metric learning is a popular
approach to the problem, but typically requires solving a semidefinite
programming (SDP) problem, which is computationally expensive. Standard
interior-point SDP solvers typically have a complexity of about O(D^6.5)
(with D the dimension of the input data), and can thus only practically solve
problems exhibiting less than a few thousand variables. Since the number of
variables is D(D+1)/2, this implies a limit upon the size of problem that can
practically be solved of around a few hundred dimensions.
popular quadratic Mahalanobis metric learning approach thus limits the size of
problem to which metric learning can be applied. Here we propose a
significantly more efficient approach to the metric learning problem based on
the Lagrange dual formulation of the problem. The proposed formulation is much
simpler to implement, and therefore allows much larger Mahalanobis metric
learning problems to be solved. The time complexity of the proposed method is
O(D^3), which is significantly lower than that of the SDP approach.
Experiments on a variety of datasets demonstrate that the proposed method
achieves an accuracy comparable to the state-of-the-art, but is applicable to
significantly larger problems. We also show that the proposed method can be
applied to solve more general Frobenius-norm regularized SDP problems
approximately.
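The key O(D^3) primitive behind such a dual approach, projecting a symmetric
matrix onto the positive semidefinite cone via an eigendecomposition, can be
sketched as follows (a generic illustration, not the authors' exact update):

```python
import numpy as np

def psd_project(A):
    """Project a symmetric matrix onto the PSD cone.

    Eigendecompose and clamp negative eigenvalues to zero.  This costs
    O(D^3) for a D x D matrix, versus the roughly O(D^6.5) cost of a
    generic interior-point SDP solver over the same matrix variable.
    """
    w, V = np.linalg.eigh(A)
    return V @ np.diag(np.clip(w, 0.0, None)) @ V.T

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance (x - y)^T M (x - y) under PSD M."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return float(d @ M @ d)
```

Keeping M PSD via this projection guarantees that mahalanobis_sq remains a
valid (nonnegative) squared distance throughout learning.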
Towards Robust Curve Text Detection with Conditional Spatial Expansion
It is challenging to detect curve texts due to their irregular shapes and
varying sizes. In this paper, we first investigate the deficiency of the
existing curve detection methods and then propose a novel Conditional Spatial
Expansion (CSE) mechanism to improve the performance of curve text detection.
Instead of regarding the curve text detection as a polygon regression or a
segmentation problem, we treat it as a region expansion process. Our CSE starts
with a seed arbitrarily initialized within a text region and progressively
merges neighboring regions based on local features extracted by a CNN and the
contextual information of already-merged regions. The CSE is highly
parameterized and
can be seamlessly integrated into existing object detection frameworks.
Enhanced by the data-dependent CSE mechanism, our curve text detection system
provides robust instance-level text region extraction with minimal
post-processing. The analysis experiment shows that our CSE can handle texts
with various shapes, sizes, and orientations, and can effectively suppress the
false positives arising from text-like textures or unexpected text included in
the same RoI. Compared with the existing curve text detection algorithms, our
method is more robust and enjoys a simpler processing flow. It also sets a new
state-of-the-art on curve text benchmarks, with an F-score of up to 78.4.
Comment: accepted by the IEEE Conference on Computer Vision and Pattern
Recognition (CVPR 2019).
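The conditional expansion process can be sketched as a greedy merge loop; the
merge_score callable below is a toy stand-in for the paper's CNN-plus-context
predictor, and the cell/neighbour representation is our assumption:

```python
def expand_region(seed, neighbours, merge_score, threshold=0.5):
    """Grow a text region from a seed by conditional expansion.

    Starting from a seed cell, repeatedly merge any neighbouring cell
    whose merge_score (in the paper, a CNN conditioned on the local
    features and the context of the already-merged region) exceeds a
    threshold, until no candidate qualifies.  Because expansion follows
    the data rather than a fixed polygon model, arbitrarily curved
    regions can be covered.
    """
    region = {seed}
    frontier = set(neighbours.get(seed, ()))
    while frontier:
        cand = frontier.pop()
        if cand in region:
            continue
        if merge_score(region, cand) > threshold:
            region.add(cand)
            frontier |= set(neighbours.get(cand, ())) - region
    return region
```

On a simple chain of cells where the score accepts only the first few cells,
the loop stops exactly at the boundary, which is the instance-level cutoff
behaviour the abstract describes.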